Optical computing is an emerging technology for next-generation efficient artificial intelligence (AI), promising ultra-high speed and efficiency. Electromagnetic field simulation is critical to the design, optimization, and validation of photonic devices and circuits. However, costly numerical simulation significantly hinders the scalability and turnaround time of the photonic circuit design loop. Recently, physics-informed neural networks have been proposed to predict the optical field solution of a single instance of a partial differential equation (PDE) with predefined parameters. Their complicated PDE formulation and lack of efficient parametrization mechanisms limit their flexibility and generalization in practical simulation scenarios. In this work, for the first time, a physics-agnostic neural operator framework, dubbed NeurOLight, is proposed to learn a family of frequency-domain Maxwell PDEs for ultra-fast parametric photonic device simulation. We balance the efficiency and generalization of NeurOLight via several novel techniques. Specifically, we discretize different devices into a unified domain, represent parametric PDEs with a compact wave prior, and encode the incident light via masked source modeling. We design the model with parameter-efficient cross-shaped neural blocks and adopt superposition-based augmentation for data-efficient learning. With these synergistic approaches, NeurOLight generalizes to a large space of unseen simulation settings, demonstrates simulation speed two orders of magnitude faster than numerical solvers, and outperforms prior neural network models with ~54% lower prediction error and ~44% fewer parameters. Our code is available at https://github.com/jeremiemelo/neurolight.
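The superposition-based augmentation above exploits the linearity of the frequency-domain Maxwell equations in the source term: for a fixed device, a linear combination of sources produces the same linear combination of field solutions, so labeled pairs can be mixed to synthesize new training samples. A minimal sketch in PyTorch is given below; the tensor shapes and the random complex mixing scheme are illustrative assumptions, not the paper's exact recipe.

```python
import torch

def superposition_augment(sources: torch.Tensor, fields: torch.Tensor, num_mix: int = 2):
    """sources, fields: complex tensors of shape (batch, H, W), all simulated for the same
    device/permittivity map. Returns synthetic (source, field) pairs formed by random mixtures."""
    batch = sources.shape[0]
    idx = torch.randint(0, batch, (batch, num_mix))            # which samples to mix
    coeff = torch.randn(batch, num_mix, dtype=sources.dtype)   # random complex mixing weights
    new_sources = torch.einsum("bk,bkhw->bhw", coeff, sources[idx])
    new_fields = torch.einsum("bk,bkhw->bhw", coeff, fields[idx])
    return new_sources, new_fields
```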
Short-term load forecasting (STLF) plays a significant role in the operation of electricity trading markets. Considering the growing concern over data privacy, federated learning (FL) has been increasingly adopted in recent research to train STLF models for utility companies (UCs). Encouragingly, in wholesale markets, since it is not realistic for power plants (PPs) to access UCs' data directly, FL is a feasible solution for PPs to obtain an accurate STLF model. However, due to the distributed nature of FL and the intense competition among UCs, defects occur increasingly often and degrade the STLF model's performance, indicating that adopting FL alone is not enough. In this paper, we propose a DRL-assisted approach, defect-aware federated soft actor-critic (DearFSAC), to robustly train an accurate STLF model for PPs to forecast precise short-term utility demand. First, we design an STLF model based on long short-term memory (LSTM) using only historical load data and time data. Furthermore, considering the uncertainty of defect occurrence, a deep reinforcement learning (DRL) algorithm is adopted to assist FL by alleviating the model degradation caused by defects. In addition, for faster convergence of FL training, an auto-encoder is designed for dimension reduction and quality evaluation of the uploaded models. In simulations, we validate our approach on real data from Helsinki's UCs in 2019. The results show that DearFSAC outperforms all other approaches regardless of whether defects occur.
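The forecasting backbone described above is a standard LSTM over historical load and time features. A minimal sketch of such a model is shown below; the layer sizes, time-feature layout, and 24-step forecast horizon are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class LSTMLoadForecaster(nn.Module):
    def __init__(self, num_time_features: int = 4, hidden_size: int = 64, horizon: int = 24):
        super().__init__()
        # input per step: one load value plus encoded time features (e.g., hour, weekday, month, holiday flag)
        self.lstm = nn.LSTM(input_size=1 + num_time_features, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, horizon)

    def forward(self, load_hist: torch.Tensor, time_feats: torch.Tensor) -> torch.Tensor:
        # load_hist: (batch, seq_len, 1); time_feats: (batch, seq_len, num_time_features)
        x = torch.cat([load_hist, time_feats], dim=-1)
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])   # (batch, horizon) forecast of future load
```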
With recent advances in optical phase change materials (PCM), photonic in-memory neurocomputing has demonstrated its superiority in optical neural network (ONN) designs, with near-zero static power consumption, time-of-light latency, and compact footprint. However, photonic tensor cores require massive hardware reuse to implement large matrix multiplications due to the limited single-core scale. The resulting large number of PCM writes leads to serious dynamic power consumption and overwhelms the fragile PCM cells, which have limited write endurance. In this work, we propose a synergistic optimization framework, ELight, to minimize the overall write effort for efficient and reliable optical in-memory neurocomputing. We first propose write-aware training to encourage similarity among weight blocks, and combine it with a post-training optimization method that reduces programming effort by eliminating redundant writes. Experiments show that ELight achieves over a 20x reduction in the total number of writes and in dynamic power with comparable accuracy. With ELight, photonic in-memory neurocomputing moves toward viable machine-learning applications with preserved accuracy, an order-of-magnitude longer lifetime, and lower programming energy.
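One way to read the write-aware training idea is as a regularizer that pulls weight blocks toward each other, so that reprogramming the same photonic tensor core from one block to the next touches fewer PCM cells. The sketch below is a hedged interpretation; the block size and the specific penalty (distance to the mean block) are illustrative assumptions rather than the paper's formulation.

```python
import torch

def block_similarity_penalty(weight: torch.Tensor, block: int = 8) -> torch.Tensor:
    """weight: (out_features, in_features); both dimensions are assumed divisible by `block`."""
    out_f, in_f = weight.shape
    blocks = weight.reshape(out_f // block, block, in_f // block, block)
    blocks = blocks.permute(0, 2, 1, 3).reshape(-1, block * block)   # (num_blocks, block*block)
    mean_block = blocks.mean(dim=0, keepdim=True)
    return ((blocks - mean_block) ** 2).mean()

# usage: total_loss = task_loss + lam * block_similarity_penalty(model.fc.weight)
```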
As deep learning has shown revolutionary performance in many artificial intelligence applications, its escalating computational demand requires hardware accelerators with massive parallelism and improved throughput. Optical neural networks (ONNs) are a promising candidate for next-generation neurocomputing due to their high parallelism, low latency, and low energy consumption. Here, we design a hardware-efficient photonic subspace neural network (PSNN) architecture that targets lower optical component usage, area cost, and energy consumption than previous ONN architectures with comparable task performance. In addition, a hardware-aware training framework is provided to minimize the required device programming precision, reduce chip area, and improve noise robustness. We experimentally demonstrate our PSNN on a butterfly-style programmable silicon photonic integrated circuit and show its utility in practical image recognition tasks.
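For intuition, a butterfly-style mesh applies log2(n) stages of 2x2 mixing units wired in an FFT-like pattern, which is what gives such architectures their reduced component and parameter count. The sketch below is a software analogue under that assumption; it is not the paper's photonic transfer-matrix model, and the trainable 2x2 parameterization is purely illustrative.

```python
import math
import torch
import torch.nn as nn

class ButterflyLinear(nn.Module):
    """Butterfly-structured transform: log2(n) stages of trainable 2x2 mixers in an FFT-like wiring.
    n must be a power of two."""
    def __init__(self, n: int):
        super().__init__()
        self.n = n
        self.num_stages = int(math.log2(n))
        self.weights = nn.Parameter(0.5 * torch.randn(self.num_stages, n // 2, 2, 2))

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (batch, n)
        batch = x.shape[0]
        for s in range(self.num_stages):
            stride = 1 << s
            # group elements that are `stride` apart into butterfly pairs
            x = x.reshape(batch, self.n // (2 * stride), 2, stride)
            pairs = x.permute(0, 1, 3, 2).reshape(batch, self.n // 2, 2)
            mixed = torch.einsum("pij,bpj->bpi", self.weights[s], pairs)
            # undo the grouping to restore the flat layout for the next stage
            x = mixed.reshape(batch, self.n // (2 * stride), stride, 2).permute(0, 1, 3, 2).reshape(batch, self.n)
        return x

# usage: y = ButterflyLinear(8)(torch.randn(4, 8))
```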
Increasing research interest focuses on sequential recommender systems, which aim to model dynamic sequence representations precisely. However, the most commonly used loss functions in state-of-the-art sequential recommendation models have essential limitations. To name a few: Bayesian Personalized Ranking (BPR) loss suffers from vanishing gradients caused by extensive negative sampling and from prediction biases; Binary Cross-Entropy (BCE) loss is sensitive to the number of negative samples, so it is likely to ignore valuable negative examples and reduce training efficiency; Cross-Entropy (CE) loss only considers the last timestamp of the training sequence, which under-utilizes the sequence information and results in inferior user sequence representations. To avoid these limitations, in this paper we propose to calculate a Cumulative Cross-Entropy (CCE) loss over the sequence. CCE is simple and direct, enjoying the virtues of painless deployment, no negative sampling, and effective and efficient training. We conduct extensive experiments on five benchmark datasets to demonstrate the effectiveness and efficiency of CCE. The results show that employing CCE loss on three state-of-the-art models, GRU4Rec, SASRec, and S3-Rec, yields 125.63%, 69.90%, and 33.24% average improvement in full-ranking NDCG@5, respectively. With CCE, the performance curve of the models on the test data rises rapidly with wall-clock time and stays superior to that of other loss functions throughout almost the entire training process.
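As described, CCE accumulates a full-softmax cross-entropy at every position of the training sequence instead of only the last one. A minimal sketch is given below; the padding convention and the averaging over valid timestamps are illustrative assumptions, not necessarily the paper's exact reduction.

```python
import torch
import torch.nn.functional as F

def cumulative_cross_entropy(logits: torch.Tensor, targets: torch.Tensor, pad_id: int = 0) -> torch.Tensor:
    """logits: (batch, seq_len, num_items) scores over the full item set at every step;
    targets: (batch, seq_len) next-item ids, with `pad_id` marking padded positions."""
    loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)), targets.reshape(-1),
                           ignore_index=pad_id, reduction="sum")
    num_valid = (targets != pad_id).sum().clamp(min=1)
    return loss / num_valid   # average over all non-padded timestamps of the sequence
```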
In the black-box adversarial attack scenario, the target model's parameters are unknown, and the attacker aims to find a successful adversarial perturbation based on query feedback under a query budget. Due to the limited feedback information, existing query-based black-box attack methods often require many queries to attack each benign example. To reduce query cost, we propose to utilize the feedback information across historical attacks, dubbed example-level adversarial transferability. Specifically, by treating the attack on each benign example as one task, we develop a meta-learning framework that trains a meta-generator to produce perturbations conditioned on benign examples. When attacking a new benign example, the meta-generator can be quickly fine-tuned based on the feedback information of the new task as well as a few historical attacks to produce effective perturbations. Moreover, since the meta-training procedure consumes many queries to learn a generalizable generator, we utilize model-level adversarial transferability to train the meta-generator on a white-box surrogate model and then transfer it to help attack the target model. The proposed framework with the two types of adversarial transferability can be naturally combined with any off-the-shelf query-based attack method to boost its performance, which is verified by extensive experiments.
Supervised deep-learning (DL)-based reconstruction algorithms have shown state-of-the-art results for highly undersampled dynamic Magnetic Resonance Imaging (MRI) reconstruction. However, the requirement for large amounts of high-quality ground-truth data hinders their application due to the generalization problem. Recently, Implicit Neural Representation (INR) has emerged as a powerful DL-based tool for solving inverse problems by characterizing the attributes of a signal as a continuous function of the corresponding coordinates in an unsupervised manner. In this work, we propose an INR-based method to improve dynamic MRI reconstruction from highly undersampled k-space data, which takes only spatiotemporal coordinates as inputs. Specifically, the proposed INR represents the dynamic MRI images as an implicit function and encodes them into a neural network. The weights of the network are learned from the sparsely acquired (k, t)-space data alone, without external training datasets or prior images. Benefiting from the strong implicit continuity regularization of INR together with explicit regularization for low-rankness and sparsity, the proposed method outperforms the compared scan-specific methods at various acceleration factors. For example, experiments on retrospective cardiac cine datasets show an improvement of 5.5-7.1 dB in PSNR at extremely high accelerations (up to 41.6-fold). The high quality and inherent continuity of the images provided by INR have great potential to further improve the spatiotemporal resolution of dynamic MRI without the need for any training data.
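The core object here is a coordinate network: an MLP that maps a spatiotemporal point (x, y, t) to the complex image value at that location, fitted only against the acquired (k, t)-space samples. The sketch below shows such a representation; the random Fourier feature encoding, depth, and width are illustrative assumptions, and the k-space data-consistency loss plus the low-rank/sparsity regularizers used for unsupervised fitting are omitted.

```python
import torch
import torch.nn as nn

class DynamicMRIINR(nn.Module):
    def __init__(self, num_freqs: int = 64, hidden: int = 256, depth: int = 4):
        super().__init__()
        # random Fourier features lift (x, y, t) to a higher-frequency input encoding
        self.register_buffer("B", 10.0 * torch.randn(3, num_freqs))
        layers, in_dim = [], 2 * num_freqs
        for _ in range(depth):
            layers += [nn.Linear(in_dim, hidden), nn.ReLU()]
            in_dim = hidden
        layers += [nn.Linear(hidden, 2)]   # real and imaginary parts of the image value
        self.mlp = nn.Sequential(*layers)

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        # coords: (N, 3) spatiotemporal points scaled to [-1, 1]; returns (N,) complex values
        proj = coords @ self.B
        feats = torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)
        out = self.mlp(feats)
        return torch.complex(out[..., 0], out[..., 1])
```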
Recent studies have shown that using an external Language Model (LM) benefits end-to-end Automatic Speech Recognition (ASR). However, predicting tokens that appear less frequently in the training set is still quite challenging. Long-tail prediction problems have been widely studied in many applications, but have been addressed by only a few studies for ASR and LMs. In this paper, we propose a new memory-augmented, lookup-dictionary-based Transformer architecture for LM. The newly introduced lookup dictionary incorporates rich contextual information from the training set, which is vital for correctly predicting long-tail tokens. In extensive experiments on Chinese and English datasets, the proposed method outperforms the baseline Transformer LM by a large margin on both word/character error rate and tail-token error rate, without any impact on decoding efficiency. Overall, we demonstrate the effectiveness of the proposed method in boosting ASR decoding performance, especially for long-tail tokens.
The objective of this paper is to learn dense 3D shape correspondence for topology-varying generic objects in an unsupervised manner. Conventional implicit functions estimate the occupancy of a 3D point given a shape latent code. Instead, our novel implicit function produces a probabilistic embedding that represents each 3D point in a part-embedding space. Assuming that corresponding points are similar in the embedding space, we implement dense correspondence through an inverse function that maps a part-embedding vector back to a corresponding 3D point. Both functions are jointly learned, together with the encoder that generates the shape latent code, using several effective, uncertainty-aware loss functions that realize our assumption. During inference, if a user selects an arbitrary point on the source shape, our algorithm automatically generates a confidence score indicating whether there is a correspondence on the target shape, as well as the corresponding semantic point if there is one. Such a mechanism inherently benefits man-made objects with different part constitutions. The effectiveness of our approach is demonstrated through unsupervised 3D semantic correspondence and shape segmentation.
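The mechanism boils down to a forward map into the part-embedding space and an inverse map back onto a (possibly different) shape, both conditioned on a shape latent code. The sketch below shows only that skeleton; the MLP architectures, dimensions, and conditioning scheme are illustrative assumptions, and the uncertainty-aware losses and the encoder are omitted.

```python
import torch
import torch.nn as nn

def mlp(in_dim: int, out_dim: int, hidden: int = 256) -> nn.Sequential:
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, hidden), nn.ReLU(),
                         nn.Linear(hidden, out_dim))

class CorrespondenceModel(nn.Module):
    def __init__(self, latent_dim: int = 256, embed_dim: int = 64):
        super().__init__()
        self.point_to_embed = mlp(3 + latent_dim, embed_dim)   # implicit part-embedding function
        self.embed_to_point = mlp(embed_dim + latent_dim, 3)   # inverse function

    def correspond(self, p_src: torch.Tensor, z_src: torch.Tensor, z_tgt: torch.Tensor) -> torch.Tensor:
        # p_src: (N, 3) points on the source shape; z_src, z_tgt: (latent_dim,) shape codes.
        # Embed source points, then invert the embedding on the target shape.
        e = self.point_to_embed(torch.cat([p_src, z_src.expand(len(p_src), -1)], dim=-1))
        return self.embed_to_point(torch.cat([e, z_tgt.expand(len(p_src), -1)], dim=-1))
```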
Patients care about what their teeth will look like after orthodontic treatment. Orthodontists usually describe the expected tooth movement based on the patient's original smile images, which is often unconvincing. The rise of deep-learning generative models changes this situation: they can visualize the outcome of orthodontic treatment and help patients foresee their future teeth and facial appearance. While previous studies mainly focus on 2D or 3D virtual treatment outcome (VTO) at the profile level, the problem of simulating the treatment outcome on a frontal facial image is poorly explored. In this paper, we build an efficient and accurate system for simulating virtual teeth-alignment effects in a frontal facial image. Our system takes as input a frontal face image of a patient with visibly malpositioned teeth and the patient's 3D scanned teeth model, and progressively generates visual results of the patient's teeth given the specific orthodontic planning steps from the doctor (i.e., the specified translations and rotations of individual teeth). We design a multi-modal encoder-decoder-based generative model to synthesize identity-preserving frontal facial images with aligned teeth. In addition, the original image's color information is used to optimize the orthodontic outcomes, making the results more natural. We conduct extensive qualitative and clinical experiments, as well as a pilot study, to validate our method.